Arena Magazine No.3 February-March 1993
FEATURES
ARTIFICE AND INTELLIGENCE
Alan Roberts
The media regularly call our attention to the heavy tread of
metallic feet: the robots, we are told, are moving in on us.
They will be intelligent robots, their micro-processor
brains performing billions of calculations each second, with
swift access to thousands of billions of pieces of data in
their memories. The coming developments in Artificial
Intelligence (AI) may mean that, as a species, we have
created our successors . . .
Will the computers, mobile or immobile, surpass all human
skills in the near future or medium future, then? Are
craftspeople, for instance, doomed to the fate of the Indian
handloom weavers of the last century - will their bones
bleach the plains? The answer is No. But before seeing
why, we should try to sum up the present status of work in
AI. The current achievements of AI are small but promising:
its present and coming performance in strictly delimited
fields, with expert systems in particular, is even more
promising. But AI holds no promise whatsoever for
eventually eclipsing human intelligence. To see more
support for this last assertion than can be given here, the
best introduction is probably two books (What Computers
Can't Do, and Mind Over Machine) by the Dreyfus brothers.
This dynamic duo comprises two Berkeley professors, one
(Hubert) a philosopher and the other (Stuart) in Industrial
Engineering and Operations Research. They have been
puncturing various AI balloons for a decade or two now -
releasing, appropriately enough, large volumes of hot gas.
The leading figures in AI research have certainly seen their
work as opening up an apparently unlimited range of human
activity to the machine, making craft, apparently, mere
child's play. Herbert Simon, a Nobel Prize winner, has
declared that '...within ten years most theories in
psychology will take the form of computer programs'. Thus
equipped with the laws of human behaviour, telling it which
stimulus will produce a desired response, a computer-
directed robot should certainly be able to toss off a
striking tapestry or two. Or a million... Simon again:
'Machines will be capable, within twenty years, of doing any
work that a man can do'. Another leader, Marvin Minsky from
MIT, has predicted that 'within a generation, the problem of
creating "artificial intelligence" will be substantially
solved'.
We can hardly fail to be impressed by the clarity of these
statements, and by their courage in laying down definite
time limits: within ten years, within twenty years, within a
generation. Before declaring ourselves and our species
obsolete, however, we should note when these three
predictions were made: in 1957, 1965 and 1967 respectively.
Not one has been realized. Not one has even come much
closer to fulfilment. When the media cover the issue, they
tend even now to repeat such claims while observing a
tactful silence about the previous fiascos. Often the
stories even adhere to the same decades-old format: no, they
can't do these things just yet, but there are exciting
results that show they have taken the first step, and the
really big developments are just around the corner...
But having made the first step does not count for much in
attaining your goal if the path you are on is in fact a
blind alley, and the picture of human intelligence
underlying these 'hard AI' claims seems false enough to
guarantee such a misdirection. For the detailed arguments
against this 'fallacy of the first step', read the Dreyfus
material cited above. A critique that is strongly based on
recent physics (and is much gentler) has been given by Roger
Penrose (in The Emperor's New Mind), who also considers the
less extreme ('soft AI') positions, which are much more
defensible.
The extravagant claims quoted above have been well and truly
exploded by the passage of time. But it is only fair to
note the more sober picture that emerges if we turn to more
recent AI literature, such as found in The Foundations of
Artificial Intelligence: A Sourcebook (1990). It is true
that it still exhibits hangovers from the Unbounded Optimism
era:
"All fields discuss the nature of man. AI tries to do
something about it. [A] task of AI as a science is to
explain human intelligence... What is common, as
intelligent agents, between Einstein, the man-on-the-street,
the tribesman on a hunt... is that they all face very
similar computational problems..."
"[A] recent book by Minsky (one of the founders of AI)
offers computational models for phenomena as diverse as
conflict, pain and pleasure, the self, the soul,
consciousness, confusion, genius, infant emotion, foreign
accents, and freedom of will..."
But fantasy trips disguised as 'science', like these, now
face more realistic competition. Other writings in this
collection show that many AI workers are now making serious
efforts to define the nature and limitations of their field,
in the process grappling with fundamental and very difficult
questions.
A 'BRAIN' WITHOUT A BODY?
But what was wrong with the picture of 'intelligence' that
guided these past - and, unfortunately, present - trips into
fairyland? It is salutary to look at one line of contrary
argument, developed particularly by Hubert Dreyfus.
If human capabilities are to be equalled by a computer, they
must be based on the following of rules; computers can do
nothing else. Of course, sometimes we do resort to rules; a
learner driver will mutter: first neutral gear, then the
ignition key, then the handbrake off ... But once past the
novice stage, there is usually no sign that a conscious rule
is being followed. To rescue the 'humanoid computer'
project, one has to assume that all our skilled behaviour
comes from following a rule, whether we are conscious of the
rule or not. But this notion of 'unconscious rules' has
little to support it, and much against it. As Dreyfus
notes:
"The important thing about skills is that, although science
requires that the skilled performance be described according
to rules, these rules need in no way be involved in
producing the performance."
A computer needs rules because the objects it deals with
must be clearly defined, and what it is to do next must be
unambiguously written in its programme. The data available
to the programme is some well-delimited set chosen because
of its relevance to the programme's purpose - it is the
programmer, of course, who decides on that purpose, and
chooses the data which will be relevant to it. Thus the
computer operates in a small, self-contained, relatively
unpuzzling world. In contrast, we poor humans have to
lumber along in a world capable of infinite novelty and do
the best we can with it. We have to make do with objects
that we can make out only hazily, tentatively change our
criteria of relevance in response to new experience,
tentatively impose patterns - patterns that generally defy
analysis - in attempting to understand a situation.
Aware of such valuable features in the human psyche, AI
researchers have naturally shown great interest in
developing programmes that can learn. Early work here
relied strongly on behaviourist theories, in which human
learning is seen largely as a matter of simple stimulus-
response conditioning, reinforcement and excitation
frequency, all easily programmable. But a body of
experimental work has now put a large question mark over
such theories, and shown how some of the most elementary
conditioning results actually depend on the experimental
context and may differ according to the subject's choice.
Indeed, and even more remarkably, there is now evidence that
non-human and even non-mammalian animals may form patterns
and use them to shape their perception of the 'input
stimulus' variously, in an altogether similar way. If we
look at the experimental findings we will be inclined to
ask, not whether a computer could 'learn' like a human
being, but rather whether it could ever learn as effectively
as a pigeon.
We might even ask: will any computer ever have as much
common sense as a pigeon? For it has turned out - as the
discussion above might well lead us to expect - that common
sense is immensely harder to program than logical thought.
Grappling with this problem, Minsky writes:
"...common sense works so well not because it is an
approximation of logic; logic is only a small part of our
great accumulation of different, useful ways to chain things
together."
Demurring from Minsky's proposal to program these various
'chaining' methods, Terry Winograd observes in Thinking
Machines:
"The rules followed by the machine can deal only with the
symbols, not their interpretation... There are basic limits
to what can be done with symbol manipulation, regardless of
how many 'different, useful ways to chain things together'
one invents. The reduction of mind to the interactive sum
of decontextualized fragments is ultimately impossible and
misleading."
THE BODY: COSTS AND BENEFITS
But if our skills cannot generally be reduced to rules,
where do they come from? Dreyfus emphasizes in his account
how they originate in, and continually depend upon, bodily
experience. It is because we are embodied that we have
learnt - that we have had to learn - how to make sense out
of experiences that are different in quality and yet
simultaneous, what our eyes tell us and what our hands tell
us, for example. Originally, it is to negotiate our way
through the physical world that we form patterns with which
to organize it, and expectations based upon these patterns.
Equipped with these patterns and expectations, we can pass
most of the time in a world that is now familiar; and yet
the patterns and expectations have this extraordinary virtue
of remaining open to correction, so that new phenomena do
not necessarily leave us floundering. Thus we semi-automate
our responses, so that the sensory world does not present
itself as an arena we must permanently struggle to
understand, and we can cope without a constant drain on our
energy and attention. If any human quality deserves the
name of 'intelligence', it is hard to think of a better
candidate than what we thus start to develop: the ability to
cope.
As Dreyfus puts it in What Computers Can't Do:
"...[A]n embodied agent can dwell in the world in such a way
as to avoid the infinite task of formalizing everything...
these global forms of recognition are not open to the
digital computer, which, lacking a body, cannot respond as a
whole but must build up its recognition starting with
determinate details..."
Thus, in the situations where automated response is all that
is required - where objects are well-defined, the body of
'relevant' data is clearly circumscribed, patterns do not
need adapting, and expectations are never disappointed - we
can just coast along using programmes of response derived
from past experience.
It is in such fields of activity, where creative and complex
thought is not demanded, that formal logic and the computer
can thrive quite well. But such cases are rarer than
generally believed. For example, most people would probably
see mathematics as the archetype of the kingdoms where
formal logic holds sway; but this is often because they are
thinking of an unrepresentative case, that of arithmetic.
In mathematics generally, it is far different. As Penrose
(himself a mathematician) writes in The Emperor's New Mind:
"People might suppose that a mathematical proof is conceived
as a logical progression, where each step follows upon the
ones that have preceded it. Yet the conception of a new
argument is hardly likely actually to proceed in this way.
There is a globality and seemingly vague conceptual content
that is necessary in the construction of a mathematical
argument..."
(Of course, the argument can always be reconstructed as a
logical progression.)
What may seem a paradoxical conclusion emerges from this: a
major reason why the computer will never duplicate human
intelligence is that it has no body with which to meet the
world. This is where we come to some interesting
consequences for the computerizing of skills, craft skills
in particular.
CRAFT, SKILL AND 'VIRTUAL REALITY'
Craft relies - perhaps more heavily than 'high art' - on
those crucial features of human understanding that derive
from and depend upon bodily experience: how materials resist
or yield, what they feel like - the staggeringly complex
integration of sensory data and instinctual needs that is
implied by a phrase like 'security blanket'. A computer
might well be programmed to control machines that duplicate
or clumsily plagiarize an existing craft work, just as
colour photocopiers can be expected eventually to turn out
remarkably similar copies of the Mona Lisa. (According to
press reports, they are already supplying passable
imitations [literally!] of a fifty-dollar note.) But no
original pieces of creative craft need ever be expected to
roll off the numerically controlled production line. (It is
worth observing that the output from such a production line
need not be monotonously uniform. Numerical control can
easily avoid the perfect sphere or the rigidly straight
line, adding random deviations that are perceptible to the
eye or hand. If there is enough demand, those who just want
bumps for the sake of bumpiness will have factories catering
to their needs.)
It might be pertinent to suggest a special significance for
craft in today's world. That world is one where 'virtual
reality' threatens to substitute for reality. Face-to-face
human encounters are increasingly undermined and constricted
as they are replaced by mediated and abstract relationships;
'technocratic rationality', economic efficiency, the
inevitability of 'progress' are assumed to justify changes
that are often dubious, unwelcome to most people and even
horrendous. If 'high art' celebrates such developments, or
even presents them neutrally, it risks losing its expected
critical edge. And if we reject the ideology which insists
that such a daunting world is 'inevitable', we might detect,
and be glad to see, a critique at least tacit in craft work
which revives older experiences now in danger of being lost:
the feel of a blanket, the weight of stone - not in 'virtual
reality', but in one's hand.
Of course, such a critique can be either reactionary or
forward-looking. It may convey a naive desire for the past
to be re-created just as it was - for the lost virtues to
emerge in the same social surroundings as they once did, for
people to be close inside the (patriarchal) extended family
or the (narrow-minded and exclusionary) village community.
Or, on the other hand, it might express a desire to recover
what is lost or endangered but to situate it in a framework
of more expansively convivial relationships. That these can
be envisaged only dimly might be regarded as no excuse to
abandon the hope for them.
We might draw a salutary lesson from all this. Even if
'technological progress' has been made into an ideology, it
is certainly not a pure myth and we need to be alert to
those of its promises and threats which happen to be real;
but we also need to keep a sharp lookout for 'scientific'
con-jobs. Recall how, in the field of nuclear power, the
vision of 'electricity too cheap to meter' was dangled
before us to silence the doubters. Remember how freeways
were going to solve the problems of expanding road traffic.
And once upon a time, the benevolent scientific genie was
about to wave his wand and reduce the working week to twenty
hours, to ten hours...
The claims of AI researchers above are probably extreme
examples of such a ploy, and they are by no means innocent.
After considering the social role of AI in general, Terry
Winograd - one of the more enlightened leaders in the field
- offers a disturbing conclusion (his own italics): "[T]he
techniques of artificial intelligence are to the mind what
bureaucracy is to human social interaction."
The alleged march of the robots gives this bureaucratic
nightmare its intimidating clincher. If we are all nearing
our use-by date anyway, due for replacement by smarter, more
durable beings made from steel, carbon fibre and plastic,
then obviously there is little point in struggling to
preserve the merely human... Actually, if we felt unkind,
we could say that those who peddle such daydreams and
nightmares are refuting their own claims in the very act of
making them. Surely they are demonstrating that no machine
will ever out-do homo sapiens in the specifically human
skill of wanking.
Alan Roberts, originally a physicist, now researches in
theoretical ecology at Monash University. This article is
based on a paper to the craft conference 'Interventions',
July 1992. An earlier version appeared in the Winter 1992
issue of Artlink.
-----------------------------------------------------------
Arena Magazine is published six times a year by:
Arena Printing and Publishing Pty Ltd
35 Argyle Street, Fitzroy, 3065, Australia.
Email: <pwilken@peg.apc.org>.
The material in Arena Magazine is copyright. Permission is
given to republish articles, in either electronic or paper
form, so long as: (1) it is for non-profit purposes; (2) that
the text of all work remains intact (including this copyright
notice); and (3) notification is sent to Arena Publishing,
either by snail or email. Applications from commercial
publishers will be considered.